perm filename LOGIC.SLI[S86,JMC]1 blob
sn#817292 filedate 1986-05-13 generic text, type C, neo UTF8
%logic.sli[s86,jmc] Slides for DARPA lecture
\input slide2.tex[1,jmc]
\centerline{\bf Feigenbaum's question:}
\bigskip
Is common sense just a matter of
a very large number of pattern-action rules?
\bigskip
\centerline{\bf My answer:}
No! Unless the expert system interprets a more
sophisticated knowledge-using system. Even then, you still have to
define the more sophisticated system and express your knowledge in its
formalism.
\vfill\eject
\centerline{\bf Effects of actions in achieving goals}
\noindent This has been the subject of the most axiomatic work.
\bigskip
\itemb situation calculus.
$s' = result(e,s)$ is the situation that results
when event {\it e} occurs in situation {\it s}.
\itemb frame problem.
How to avoid specifying everything that doesn't change?
\itemb qualification problem.
How to avoid endless qualifications in axioms?
\vfill\eject
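The $result(e,s)$ function of situation calculus can be made concrete in a few lines. The sketch below is illustrative only (the `move`/`paint` event tuples and fluent names are assumptions, not from the lecture); note how the frame problem surfaces as the dictionary copy that carries over everything an event does not mention.

```python
# Minimal sketch of situation calculus: a situation is an assignment of
# fluents to values, and result(e, s) builds the successor situation.
# All names here are illustrative assumptions, not from the lecture.

def result(event, situation):
    """Return the situation that results when `event` occurs in `situation`.

    The frame problem appears as the dict copy below: everything the
    event does not mention is carried over unchanged by brute force.
    """
    new = dict(situation)          # "frame axiom": copy what doesn't change
    action, *args = event
    if action == "move":
        x, l = args
        new[("loc", x)] = l        # only the moved block's location changes
    elif action == "paint":
        x, c = args
        new[("color", x)] = c      # only the painted block's color changes
    return new

s0 = {("loc", "A"): "table", ("color", "A"): "red"}
s1 = result(("move", "A", "B"), s0)
s2 = result(("paint", "A", "blue"), s1)
```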
\centerline{\bf What is Common Sense?}
\bigskip
A certain combination of knowledge and reasoning ability required
for successful behavior in complex environments.
\bigskip
\noindent I will discuss three things:
\medskip
\itemb what kind of knowledge?
\itemb examples of common sense knowledge.
\itemb what reasoning abilities?
\vfill\eject
\centerline{\bf Programs with common sense -- 1958}
\vskip .15truein
\noindent Represent as sentences of mathematical logic:
\itemb general facts about the effects of actions and other events
\itemb goals to be achieved
\itemb the principle that actions that achieve goals should be done
\itemb facts about the particular situation
\vskip .15truein
\noindent Deduce a sentence of the form
$should(action)$ and then do {\it action}.
\vskip 0pt
The original Advice Taker plan won't work. Why it won't work wasn't
apparent until something better came along --- namely non-monotonic reasoning.
\vfill\eject
\centerline{\bf What is Artificial Intelligence?}
\bigskip
The question is controversial. Here's my opinion.
AI is concerned with how problems can be solved and goals
achieved in certain kinds of situations. Since it is concerned
with the relations between problems and solution methods, it
is the same whether the problem-solver is human, a machine or
a creature from Alpha-Centauri.
However, we are mainly interested in the same kinds of intellectual
and physical environment for the problem solving as is inhabited
by humans. Moreover, our best current clues as to what methods
work best come from considering human methods. For this reason,
AI has much in common with Cognitive Science, which is specifically
concerned with human intelligent behavior.
\vfill\eject
\centerline {\bf Formalized non-monotonic reasoning (1978):}
\noindent Monotonicity:
If a sentence {\it p} is inferred from a set {\it A} of premisses
and $A ⊂ B$, then {\it p} is inferred from $B$.
Common sense reasoning includes non-monotonic steps.
\itemb Non-monotonic logic (McDermott and Doyle)
\itemb Default logic (Reiter)
\itemb Circumscription (McCarthy)
\itemb Pointwise Circumscription (Lifschitz)
\vfill\eject
\centerline{\bf Common sense data base:}
Once we have a good common sense reasoning
system and know how to express common sense knowledge in a general way,
we can create a common sense database usable by any
program that requires common sense. Opinions differ
by a factor of 1000 as to how many facts the database
would need to be useful. I think ten thousand, but some
think ten million.
\vfill\eject
\centerline{\bf Logic Programming}
\medskip
Certain formulas of logic (conjunctions of Horn clauses)
can be interpreted directly by a
computer program (Prolog interpreter) to find objects
satisfying these formulas.
\medskip
\noindent Color this map with four colors.
$$\displaylines{n(yellow,blue).\hfill\cr
n(yellow,green).\hfill\cr
n(yellow,red).\hfill\cr
n(blue,yellow).\hfill\cr
\vdots\hfill\cr
n(red,green).\hfill\cr}$$
%
$$\displaylines{goal(R1,\ldots,R6) ←\hfill\cr
\hfill n(R1,R2),\ldots,n(R1,R6),n(R2,R3),\ldots,n(R5,R6).\cr}$$
\vfill
\eject
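The Horn-clause program above can be mimicked by exhaustive search: $n(a,b)$ succeeds when colors $a$ and $b$ may label neighboring regions, i.e. differ. A minimal sketch follows; since the slide elides the exact map with $\ldots$, the six-region adjacency list here is an illustrative assumption.

```python
from itertools import product

COLORS = ["yellow", "blue", "green", "red"]

def n(a, b):
    # n(a, b): colors a and b may label neighboring regions, i.e. differ
    return a != b

# Illustrative adjacency for six regions (the slide elides the exact map):
ADJACENT = [(1, 2), (1, 3), (1, 4), (1, 5), (1, 6),
            (2, 3), (3, 4), (4, 5), (5, 6)]

def goal():
    """Enumerate assignments to R1..R6, as a Prolog interpreter would by
    backtracking, and yield those satisfying every n(Ri, Rj) constraint."""
    for regions in product(COLORS, repeat=6):
        if all(n(regions[i - 1], regions[j - 1]) for i, j in ADJACENT):
            yield regions

solution = next(goal())   # the first coloring satisfying the goal clause
```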
\centerline{\bf Compromising generality for efficiency}
\bigskip
\itemb Accept shallow knowledge as in MYCIN.
\itemb Put some of the knowledge in the program.
\itemb Limit the domain of the logic as in STRIPS.
\vfill
\eject
\centerline{\bf Why non-monotonic reasoning?}
\bigskip
\itemb The system (like people) needs to assume it knows about all
relevant phenomena.
\itemb The expert system maker can't anticipate all exceptions.
\itemb Example: Unexpectedly, for security reasons, a certain
radar can't be used.
\vfill\eject
\centerline {\bf Formula Circumscription:}
\itemb $A(P)$ is an axiom about a predicate vector $P$.
\itemb $E(P,x)$ is a formula which we want to make false
for a maximal set of values of $x$.
$$A'(P) ≡ A(P) ∧ ∀P'.(A(P') ∧ (∀x.E(P',x) ⊃ E(P,x))$$
$$⊃ (∀x.E(P,x) ≡ E(P',x)))$$
\vfill\eject
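On a finite domain the second-order circumscription formula can be checked by brute force: represent each predicate $P$ as a subset of the domain and keep exactly the models of $A$ whose $E$-extension is not strictly included in by any other model's. The sketch below uses an illustrative three-element domain and toy choices of $A$ and $E$; none of these specifics come from the lecture.

```python
from itertools import combinations

DOMAIN = [0, 1, 2]   # illustrative three-element domain (an assumption)

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def A(P):
    # Illustrative axiom A(P): the predicate holds at 0.
    return 0 in P

def E(P, x):
    # E(P, x): the formula whose extension is minimized; here simply P(x).
    return x in P

def circumscribe(A, E, domain):
    """Models of A whose E-extension is minimal under set inclusion,
    i.e. the models satisfying A'(P) on this finite domain."""
    models = [P for P in subsets(domain) if A(P)]
    ext = lambda P: frozenset(x for x in domain if E(P, x))
    return [P for P in models
            if not any(ext(Q) < ext(P) for Q in models)]

minimal = circumscribe(A, E, DOMAIN)   # expect P true only at 0
```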
\centerline{\bf Moving and painting blocks --- 1960s situation calculus}
\medskip
\overfullrule=0pt
\itemb Qualified result-of-action axioms:
$$\displaylines{∀x l s.clear(top(x),s) ∧ clear(l,s) ∧ ¬tooheavy(x)\hfill\cr
\hfill ⊃ loc(x,result(move(x,l),s)) = l\cr}$$
$$\displaylines{∀x c s.color(x,result(paint(x,c),s)) = c.\hfill\cr}$$
\medskip
\itemb Frame axioms:
$$∀x y l s.color(y,result(move(x,l),s)) = color(y,s).$$
$$∀x y l s.y ≠ x ⊃ loc(y,result(move(x,l),s)) = loc(y,s).$$
$$∀x y c s.loc(x,result(paint(y,c),s)) = loc(x,s).$$
$$∀x y c s.y ≠ x ⊃ color(x,result(paint(y,c),s)) = color(x,s).$$
\medskip
\itemb All qualifications are given in the axioms, and a full statement
is given of what doesn't change when an action occurs.
\vfill\eject
\centerline{\bf Moving and painting blocks using circumscription}
\overfullrule=0pt
\bigskip
\itemb Axioms about locations and the effects of moving objects.
$$∀x e s.¬ab(aspect1(x,e,s)) ⊃ loc(x,result(e,s)) = loc(x,s)$$
$$∀x l s.ab(aspect1(x,move(x,l),s))$$
$$∀x l s.¬ab(aspect3(x,l,s)) ⊃ loc(x,result(move(x,l),s)) = l$$
\bigskip
\itemb Axioms about colors and painting.
$$∀x e s.¬ab(aspect2(x,e,s)) ⊃ color(x,result(e,s)) = color(x,s)$$
$$∀x c s.ab(aspect2(x,paint(x,c),s))$$
$$∀x c s.¬ab(aspect4(x,c,s)) ⊃ color(x,result(paint(x,c),s)) = c$$
\bigskip
\itemb This treats the qualification problem, because any number
of conditions that may be imagined as preventing moving or painting
can be added later and asserted to imply the corresponding $ab\ aspect\ldots$.
\itemb It treats the frame problem in that we don't have to say
that moving doesn't affect colors and painting doesn't affect locations.
\vfill\eject
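The circumscriptive treatment can be read procedurally as default persistence: a fluent carries over unless the occurring event makes the relevant aspect abnormal, and $ab$ is computed rather than axiomatized. A minimal sketch, with hypothetical names (`ab_aspect1`, event tuples) standing in for the formal $ab\ aspect$ terms:

```python
# Sketch of the circumscriptive axioms as default persistence.
# ab(aspect) is computed, not axiomatized: an aspect is abnormal only
# when the occurring event actually affects it. Names are illustrative.

def ab_aspect1(x, event):
    # moving x makes x's location-persistence abnormal
    return event[0] == "move" and event[1] == x

def ab_aspect2(x, event):
    # painting x makes x's color-persistence abnormal
    return event[0] == "paint" and event[1] == x

def result(event, s):
    new = {}
    for (fluent, x), value in s.items():
        if fluent == "loc":
            # loc persists by default; no explicit frame axiom needed
            new[(fluent, x)] = event[2] if ab_aspect1(x, event) else value
        elif fluent == "color":
            new[(fluent, x)] = event[2] if ab_aspect2(x, event) else value
    return new

s0 = {("loc", "A"): "table", ("loc", "B"): "table", ("color", "A"): "red"}
s1 = result(("move", "A", "B"), s0)
```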
\centerline{\bf Difficulties and Compromises with Situation Calculus}
\bigskip
\itemb Difficulties
\itemxx Frame problem --- The need to say what doesn't change gives $n↑2$ axioms
for $n$ actions.
\itemxx Qualification problem --- Conditions for performing actions are never
qualified in a fully general way.
\itemxx Combinatorial explosion.
\itemxx Insufficient generality.
\bigskip
\itemb 1970s Compromises for practicality
\itemxx STRIPS uses add and delete lists, giving up the ability to compare
situations.
\itemxx Accept systems of limited scope.
\itemxx Specialize reasoning system to avoid combinatorial explosion.
\vfill\eject
\centerline{\bf 1980s trends and goals}
\bigskip
\itemb Move knowledge from the program back into the logic.
\itemb Elaborate expert system shells (ART, KEE, OPS-5) to
achieve greater expressiveness while keeping computational efficiency.
\itemb Formalize non-monotonic reasoning, e.g. ATMS.
\bigskip
\itemb Long range ideas
\itemxx Formalized contexts to achieve generality needed for common sense
database.
\itemxx Mental situations
\itemxx Keep the logic general but allow declarative expression of
control information --- e.g. Genesereth's MRS.
\vfill\eject
\centerline{\bf 1970s Expert Systems --- MYCIN}
\bigskip
\itemb Incorporate problem solving knowledge of domain expert.
\itemb Accept limitations needed for easy programmability.
\itemb Accept limitations needed for execution speed.
\bigskip
\noindent {\bf These limitations often include:}
\medskip
\itemb Extreme specialization.
\itemb No planning.
\itemb No fundamental knowledge.
\vfill\eject
\centerline{\bf MYCIN}
\bigskip
\itemb Typical rules (in Charniak and McDermott notation):
$$\displaylines{(← (is\ ?x\ streptococcus\hbox{-}group\hbox{-}a) L1=3\hfill\cr
\hfill (and\ (sterile\ (site\ ?x))\cr
\hfill (is\ ?x\ streptococcus)\cr
\hfill (portal\ ?x\ throat))).\cr}$$
\medskip
$$\displaylines{(← (is\ ?x\ neisseria) L1=5\hfill\cr
\hfill (and\ (gram\hbox{-}stain\ ?x\ neg)\cr
\hfill (morphology\ ?x\ coccus))).\cr}$$
\medskip
\itemb Limitations:
\itemxx Just one patient.
\itemxx No processes occurring in time.
\itemxx No fundamental knowledge --- e.g. doesn't know bacteria
are organisms that grow and reproduce.
\medskip
\itemb Nevertheless, systems with these limitations are useful.
\vfill\eject
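Rules of the form above are used by backward chaining: to establish a conclusion, try each rule that concludes it and recursively establish its conditions. A toy sketch, assuming ground facts and ignoring certainty factors such as the $L1=$ annotations (the rule and fact names are illustrative, not MYCIN's actual data):

```python
# Toy backward chainer over ground MYCIN-style rules. Certainty factors
# (the "L1=" annotations on the slide) are ignored; all names here are
# illustrative, not taken from MYCIN's actual implementation.

RULES = [
    (("is", "org1", "streptococcus-group-a"),
     [("sterile", "site-of-org1"),
      ("is", "org1", "streptococcus"),
      ("portal", "org1", "throat")]),
    (("is", "org1", "neisseria"),
     [("gram-stain", "org1", "neg"),
      ("morphology", "org1", "coccus")]),
]

FACTS = {("gram-stain", "org1", "neg"),
         ("morphology", "org1", "coccus")}

def prove(goal):
    """A goal holds if it is a known fact, or some rule concludes it
    and all of that rule's conditions can in turn be proved."""
    if goal in FACTS:
        return True
    return any(head == goal and all(prove(c) for c in body)
               for head, body in RULES)
```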
\centerline{\bf Why logic in AI?}
\bigskip
\itemb What humans learn is almost entirely expressed declaratively.
\itemb Logic formalizes correct reasoning.
\itemb Other systems have turned out to be subsets of logic.
\itemb The difficulties in using logic turn up in other systems eventually.
\itemb The difficulties with logic can be overcome.
\vfill\eject
\centerline{\bf DoD and DARPA Support of AI}
\bigskip
\itemb Early M.I.T. work supported via Joint Services contract with R.L.E.
\itemb Early Newell and Simon work supported by Air Force via RAND
\itemb DARPA support of AI started in 1964.
\itemb U.S. support preceded other countries' by a large margin.
\itemb U.S. leads other countries by a large margin.
\bigskip
\itemb Payoff is beginning.
\itemb Breakthrough will probably be in U.S.
\itemb DARPA support still very important.
\vfill\eject
\centerline{\bf What is mathematical logic?}
\bigskip
A language of mathematical logic comprises
rules for forming sentences that can be used to express
information and allowed rules of inference enabling
new sentences to be {\it deduced} from previous sentences.
AI mostly uses first
order languages, in which the {\it predicates} that apply to {\it objects}
are not themselves objects.
Theorem proving programs
{\it try} to prove sentences from premises
or find objects that satisfy formulas.
Making these programs efficient requires new ways
of letting the user control what inferences are made and when.
Some steps in theorem proving and logical problem solving:
\itemb Logic theory machine --- Newell and Simon 1956
\itemb Resolution theorem proving method --- J. A. Robinson 1965
\itemb QA3 --- Cordell Green 1969
\itemb Microplanner --- Carl Hewitt, Terry Winograd 1970
\itemb Logic programming --- Alain Colmerauer 1971
\itemb Formalized non-monotonic reasoning 1978
\vfill
\eject
\centerline{\bf Theoretical knowledge}
\vskip .15truein
\noindent ``A container is sterile if all the bacteria in it are dead''.
\vskip 0pt
The knowledge is theoretical, because we don't use it directly by
\itemb checking each bacterium to see if it is dead
\itemb or knocking each bacterium on the head to kill it.
\vskip 0pt
\noindent Instead we put the contents in agar to test it or heat it
to sterilize it.
\vfill\eject
\centerline{\bf The epistemology of common sense:}
\bigskip
\itemb What must a robot know about the common sense world?
\itemb What inferences must it admit?
\vskip .15truein
We can often usefully separate this {\it epistemological} problem
from the heuristic problem
of programming the search for useful inferences.
\vfill\eject
\centerline{\bf Areas of common sense knowledge:}
\bigskip
\itemb effects of actions and other events
\itemb relations between appearance and reality
\itemb objects and their parts
\itemb {\it is-a} hierarchies
\itemb knowledge and belief
\itemb vision as a dynamic process
\itemb communication as a process
\itemb natural kinds vs. arbitrary boundaries
\vfill\eject
\centerline{\bf Open questions about common sense:}
\itemb effects of concurrent events
\itemb approximate theories, granularity, levels of detail
\itemb Which facts can be used directly in pattern-action rules?
\itemb Which facts can be used directly as fragments of logic programs?
\itemb When must the same fact be used in various ways?
\itemb expression of heuristic information as facts
\itemb maintaining modularity
\itemb ambiguity tolerance
\itemb elaboration tolerance
\itemb What is the correct formalism for describing modifications to theories?
\vfill\eject
\end